2 research outputs found

    Use of Task-Relevant Spoken Word Stimuli in an Auditory Brain-Computer Interface

    Auditory brain-computer interfaces (aBCIs) may provide an effective communication solution for patients with severe locked-in syndrome, late-stage ALS (Lou Gehrig's disease), or upper spinal cord injury who are not candidates for implanted electrodes. The feasibility of auditory BCI has been shown for both healthy participants (Hill et al., 2004) and impaired populations (Sellers and Donchin, 2006). Hill et al. (2014) found similar BCI performance in healthy participants and those with locked-in syndrome in a paradigm comparing spoken words with pure-tone stimuli. Additional BCI research has explored variations that augment P300 signals for use in speller paradigms, including more meaningful auditory stimuli (Klobassa et al., 2009; Furdea et al., 2009; Simon et al., 2014). These studies recognized that end users strongly prefer natural sounds over repeated tone stimuli. However, all of these systems required an association between sound and target stimulus, typically reinforced by a visual support matrix, making them unusable by the intended end users of an auditory BCI. Attempts to remove the need for visual referencing through serial presentation of spoken letter streams (Hoehne and Tangermann, 2014) or spoken words (Ferracuti et al., 2013) have met with limited success but point toward a potential high-speed communication solution. The present study presents a method that uses task-relevant spoken word stimuli to eliminate visually presented references. With spoken word stimuli, a BCI system could draw on a stimulus set as large as the user's vocabulary and provide faster communication output than spelling systems. As a control, spoken word stimuli with no task-specific relevance were also tested. Audio-spatial stimulus cues have been shown to significantly improve aBCI performance (Käthner et al., 2013; Schreuder et al., 2011). The present study specifically evaluates the potential BCI performance gains from semantic and audio-spatial relevance by eliciting auditory oddball P300 responses to task-relevant directional stimuli (the spoken words 'front', 'back', 'left', and 'right'). Participants completed several trials of a motivational game with directionally relevant targets over two experimental sessions. Offline analysis of training data evaluated the impact of stimulus characteristics on BCI performance. Questionnaire results on workload, motivation, and system usability accurately reflected participants' BCI performance. A behavioral button-press study further investigated the influence of the paradigm's spatial cues and also highlighted differences in the semantic relevance of the stimuli. Behavioral results correlated with BCI performance. The results indicate that task-relevant stimuli are a viable option for eliminating artificial and visual stimulus references. They also highlight several considerations for future auditory BCI studies, including classifier selection, the importance of hearing thresholds, the value of behavioral correlates of BCI performance, and the use of spatially separated spoken word stimuli.
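As a rough illustration of the offline-analysis step this abstract describes, the sketch below classifies stimulus-locked epochs as target (attended spoken word) versus non-target using a shrinkage linear discriminant, a common choice for P300 data. This is not the authors' pipeline: the epoch dimensions, timing, deflection size, and synthetic data are all illustrative assumptions.

```python
# Minimal sketch of offline target/non-target P300 classification.
# All shapes, timings, and the synthetic EEG are assumptions for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 400, 8, 100   # assumed: 0-0.5 s epochs at 200 Hz

# Synthetic stand-in for stimulus-locked epochs: attended ("target") words carry
# a small Gaussian deflection peaking near 300 ms, mimicking an oddball P300.
y = rng.integers(0, 2, n_epochs)                # 1 = attended word, 0 = ignored
X = rng.normal(size=(n_epochs, n_channels, n_samples))
p300 = np.exp(-0.5 * ((np.arange(n_samples) - 60) / 8.0) ** 2)  # peak ~300 ms
X[y == 1] += 0.4 * p300                         # added on all channels of targets

# Features: flattened channel-by-time amplitudes, a standard P300 representation.
features = X.reshape(n_epochs, -1)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, features, y, cv=5)
print(f"cross-validated target vs. non-target accuracy: {scores.mean():.2f}")
```

Shrinkage regularization matters here because the flattened feature vector (800 dimensions) is large relative to the number of training epochs, a typical situation in single-session BCI calibration.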

    A Noninvasive Brain-Computer Interface for Real-Time Speech Synthesis: The Importance of Multimodal Feedback

    We conducted a study of a motor imagery brain-computer interface (BCI) using electroencephalography to continuously control a formant frequency speech synthesizer with instantaneous auditory and visual feedback. Over a three-session training period, sixteen participants learned to control the BCI for production of three vowel sounds (/i/ [heed], /ɑ/ [hot], and /u/ [who'd]) and were split into three groups: those receiving unimodal auditory feedback of synthesized speech, those receiving unimodal visual feedback of formant frequencies, and those receiving multimodal, audio-visual (AV) feedback. Audio feedback was provided by a formant frequency artificial speech synthesizer, and visual feedback was given as a 2-D cursor on a graphical representation of the plane defined by the first two formant frequencies. We found that combined AV feedback led to the greatest performance in terms of percent accuracy, distance to target, and movement time to target compared with either unimodal feedback of auditory or visual information. These results indicate that performance is enhanced when multimodal feedback is meaningful for the BCI task goals, rather than serving as a generic biofeedback signal of BCI progress.
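To make the formant-plane geometry concrete, the sketch below treats a decoded 2-D control signal as a cursor in (F1, F2) space and scores it by Euclidean distance to each vowel target. The target values are approximate textbook formant averages, and the Euclidean metric is an assumption; the study's exact synthesizer parameters and scoring are not reproduced here.

```python
# Hedged sketch of distance-to-target scoring in the F1-F2 vowel plane.
# Formant targets are approximate published averages, not the study's values.
import numpy as np

VOWEL_TARGETS_HZ = {            # (F1, F2), rough adult male averages
    "/i/ (heed)":  (270, 2290),
    "/A/ (hot)":   (730, 1090),
    "/u/ (who'd)": (300,  870),
}

def distance_to_target(cursor_f1f2, target_f1f2):
    """Euclidean distance in the F1-F2 plane; the exact metric used in the
    study is an assumption here."""
    return float(np.linalg.norm(np.asarray(cursor_f1f2, dtype=float)
                                - np.asarray(target_f1f2, dtype=float)))

# Example: a decoded cursor position partway toward /u/.
cursor = (350.0, 1000.0)
for label, target in VOWEL_TARGETS_HZ.items():
    print(f"{label:12s} -> {distance_to_target(cursor, target):7.1f} Hz")
```

In a feedback loop like the one described, the same cursor position would simultaneously drive the visual display (a dot on the F1-F2 plane) and the audio synthesizer (a vowel with those formants), which is what makes the multimodal feedback condition task-meaningful rather than a generic progress signal.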